Subgraph similarity search is a fundamental operator in graph analysis. In this framework, given a query graph and a graph database, the goal is to identify subgraphs of the database graphs that are structurally similar to the query. Subgraph edit distance (SED) is one of the most expressive measures of subgraph similarity. In this work, we study the problem of learning SED from a training set of graph pairs and their SED values. To this end, we design a novel siamese graph neural network called NeuroSED, which learns an embedding space with a rich structure reminiscent of SED. With the help of a specially crafted inductive bias, NeuroSED not only achieves high accuracy but also ensures that the predicted SED, like the true SED, satisfies the triangle inequality. The design is generic enough to also model graph edit distance (GED), while ensuring that the predicted GED space is a metric space, like the true GED space. Extensive experiments on real graph datasets, for both SED and GED, establish that NeuroSED achieves approximately 2 times lower RMSE than the state of the art and is approximately 18 times faster than the fastest baseline. Furthermore, owing to its pair-independent embeddings and theoretical properties, NeuroSED enables retrieval of graphs and subgraphs that is approximately 3 orders of magnitude faster.
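One simple way to obtain an embedding-space distance that, like true SED, satisfies the triangle inequality is a hinge (ReLU) distance between graph embeddings. The sketch below is a minimal numpy illustration of that property using random stand-in vectors, not the paper's trained GNN:

```python
import numpy as np

def pred_sed(z_query, z_target):
    """Hinge distance between two graph embeddings.

    ReLU is componentwise subadditive, so for ANY fixed embedding
    function this distance obeys the triangle inequality.
    """
    return float(np.linalg.norm(np.maximum(z_query - z_target, 0.0)))

# Spot-check the triangle inequality on random stand-in embeddings.
rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c = rng.normal(size=(3, 16))
    assert pred_sed(a, c) <= pred_sed(a, b) + pred_sed(b, c) + 1e-9
```

The distance is deliberately asymmetric, matching the fact that SED asks how far the query is from being a subgraph of the target, not the reverse.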
Recent methods demonstrate that data augmentation using counterfactual knowledge can teach models the causal structure of a task, leading to robust and generalizable models. However, such counterfactual data often has a limited scale and diversity if crowdsourced and is computationally expensive to extend to new perturbation types if generated using supervised methods. To address this, we introduce a new framework called DISCO for automatically generating high-quality counterfactual data at scale. DISCO engineers prompts to generate phrasal perturbations with a large general language model. Then, a task-specific teacher model filters the generation to distill high-quality counterfactual data. We show that learning with this counterfactual data yields a comparatively small student model that is 6% (absolute) more robust and generalizes 5% better across distributions than baselines on various challenging evaluations. This model is also 15% more sensitive in differentiating original and counterfactual examples, on three evaluation sets written by human workers and via human-AI collaboration.
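The generate-then-filter pipeline can be sketched as below; `generate_perturbations` and `teacher_prob` are hypothetical stand-ins for prompting the large general LM and scoring with the task-specific teacher model:

```python
def generate_perturbations(example):
    # Stand-in for prompting a large LM for phrasal perturbations.
    return [example.replace("happy", "miserable"),
            example.replace("happy", "glad")]

def teacher_prob(original, perturbed):
    # Stand-in: a real teacher scores whether the perturbation
    # produces a genuine (label-changing) counterfactual.
    return 0.9 if "miserable" in perturbed else 0.2

def distill_counterfactuals(examples, threshold=0.5):
    """Keep only generations the teacher judges to be high quality."""
    kept = []
    for ex in examples:
        for cand in generate_perturbations(ex):
            if teacher_prob(ex, cand) >= threshold:
                kept.append((ex, cand))
    return kept

pairs = distill_counterfactuals(["the movie made me happy"])
```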
Recent work has shown that large language models are capable of generating natural language reasoning steps or Chains-of-Thoughts (CoT) to answer a multi-step question when prompted to do so. This is insufficient, however, when the necessary knowledge is not available or up-to-date within a model's parameters. A straightforward approach to address this is to retrieve text from an external knowledge source using the question as a query and prepend it as context to the model's input. This, however, is also insufficient for multi-step QA where \textit{what to retrieve} depends on \textit{what has already been derived}. To address this issue we propose IRCoT, a new approach that interleaves retrieval with CoT for multi-step QA, guiding the retrieval with CoT and in turn using retrieved results to improve CoT. Our experiments with GPT3 show substantial improvements in retrieval (up to 22 points) and downstream QA (up to 16 points) over the baselines on four datasets: HotpotQA, 2WikiMultihopQA, MuSiQue, and IIRC. Notably, our method also works well for much smaller models such as T5-Flan-large (0.7B) without any additional training.
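The interleaving loop can be sketched as follows, with a toy keyword retriever and a scripted generator standing in for the real retriever and GPT3 (IRCoT's actual prompting and stopping criteria differ):

```python
corpus = {"capital": "Paris is the capital of France.",
          "river": "The Seine flows through Paris."}

def retrieve(query):
    # Toy retriever: return paragraphs whose keyword appears in the query.
    return [text for key, text in corpus.items() if key in query.lower()]

def generate_step(question, paragraphs, cot):
    # Toy LM: one intermediate reasoning step, then a final answer.
    return ("So the answer is Paris." if cot
            else "First find the capital of France: it is Paris.")

def ircot(question, retrieve, generate_step, max_steps=4):
    """Alternate between extending the CoT and retrieving with its last step."""
    paragraphs = retrieve(question)
    cot = []
    for _ in range(max_steps):
        step = generate_step(question, paragraphs, cot)
        cot.append(step)
        if step.startswith("So the answer is"):
            break
        paragraphs += retrieve(step)  # new CoT step guides the next retrieval
    return cot, paragraphs

cot, paragraphs = ircot("Which river flows through the capital of France?",
                        retrieve, generate_step)
```

Each retrieved paragraph feeds the next generation step, and each generated step becomes the next retrieval query, which is the core of the method.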
Characterizing the implicit structure of the computation within neural networks is a foundational problem in the area of deep learning interpretability. Can their inner decision process be captured symbolically in some familiar logic? We show that any transformer neural network can be translated into an equivalent fixed-size first-order logic formula which may also use majority quantifiers. The idea is to simulate transformers with highly uniform threshold circuits and leverage known theoretical connections between circuits and logic. Our findings also reveal the surprising fact that the entire transformer computation can be reduced merely to the division of two (large) integers. While our results are most pertinent for transformers, they apply equally to a broader class of neural network architectures, namely those with a fixed-depth uniform computation graph made up of standard neural net components, which includes feedforward and convolutional networks.
Psychomotor retardation in depression has been associated with speech timing changes in dyadic clinical interviews. In this work, we investigate speech timing features from free-living dyadic interactions. Apart from the possibility of continuous monitoring to complement clinical visits, a study in free-living conditions would also allow inferring sociability features, such as the dyadic interaction frequency, that are implicated in depression. We adapted a speaker-count estimator into a dyadic interaction detector with a specificity of 89.5% and a sensitivity of 86.1% on the DIHARD dataset. Using the detector, we obtained speech timing features from multi-day audio recordings of 32 participants, comprising 13 healthy individuals, 11 individuals with depression, and 8 individuals with psychotic disorders. For participants with no or mild depression, dyadic interaction frequency increased with depression severity, indicating a potential marker of depression onset. For participants with moderate or severe depression, however, dyadic interaction frequency decreased with increasing depression severity. Among the speech timing features, response time had a significant positive correlation with depression severity. Our work shows the potential of dyadic interaction analysis of free-living audio recordings to obtain markers of depression severity.
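One speech-timing feature from the study, response time, can be sketched as the gap between the end of one speaker's turn and the start of the other speaker's next turn. The segment format and feature definition below are illustrative assumptions, not the study's exact pipeline:

```python
def response_times(turns):
    """Gaps at speaker changes.

    turns: list of (speaker, start_sec, end_sec), sorted by start time.
    Returns one gap (in seconds) per change of speaker; overlapping
    turns are clipped to a gap of zero.
    """
    gaps = []
    for (spk_a, _, end_a), (spk_b, start_b, _) in zip(turns, turns[1:]):
        if spk_a != spk_b:  # only speaker changes count as responses
            gaps.append(max(start_b - end_a, 0.0))
    return gaps

turns = [("A", 0.0, 2.0), ("B", 2.6, 4.0), ("B", 4.2, 5.0), ("A", 5.9, 7.0)]
rts = response_times(turns)
```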
We prove that transformer neural networks with logarithmic precision in the input length (and whose feedforward subnetworks are computable using linear space in the input length) can be simulated by constant-depth uniform threshold circuits. Thus, such transformers only recognize formal languages in $\mathsf{TC}^0$, the class of languages defined by constant-depth, polynomial-size threshold circuits. This demonstrates a connection between a practical claim in NLP and a theoretical conjecture in computational complexity theory: "attention is all you need" (Vaswani et al., 2017), i.e., transformers are capable of all efficient computation, only if all efficiently computable problems can be solved with log space, i.e., $\mathsf{L} = \mathsf{P}$. We also construct a transformer that can evaluate any constant-depth threshold circuit on any input, proving that transformers can follow instructions that are representable in $\mathsf{TC}^0$.
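To make the circuit class concrete: $\mathsf{TC}^0$ circuits are built from threshold gates, each of which fires iff at least $k$ of its Boolean inputs are 1 (majority is the special case $k = \lfloor n/2 \rfloor + 1$). A tiny gate evaluator, purely as illustration and unrelated to the paper's construction:

```python
def threshold_gate(inputs, k):
    """Fires iff at least k of the Boolean (0/1) inputs are 1."""
    return int(sum(inputs) >= k)

def majority(inputs):
    """Majority gate: fires iff strictly more than half the inputs are 1."""
    return threshold_gate(inputs, len(inputs) // 2 + 1)
```

A $\mathsf{TC}^0$ circuit wires a constant number of layers of such gates, with polynomially many gates per layer.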
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Question-answering datasets require a broad set of reasoning skills. We show how to use question decompositions to teach language models these broad reasoning skills in a robust fashion. Specifically, we use widely available QDMR representations to programmatically create hard-to-cheat synthetic contexts for real questions in six multi-step reasoning datasets. These contexts are carefully designed to avoid reasoning shortcuts prevalent in real contexts that prevent models from learning the right skills. This results in a pretraining dataset, named TeaBReaC, containing 525K multi-step questions (with associated formal programs) covering about 900 reasoning patterns. We show that pretraining standard language models (LMs) on TeaBReaC before fine-tuning them on target datasets improves their performance by up to 13 F1 points across 4 multi-step QA datasets, with up to 21 point gain on more complex questions. The resulting models also demonstrate higher robustness, with a 5-8 F1 point improvement on two contrast sets. Furthermore, TeaBReaC pretraining substantially improves model performance and robustness even when starting with numerate LMs pretrained using recent methods (e.g., PReasM, POET). Our work thus shows how to effectively use decomposition-guided contexts to robustly teach multi-step reasoning.
Investigating the reasoning abilities of transformer models, and discovering new challenging tasks for them, has been a topic of much interest. Recent studies have found these models to be surprisingly strong at performing deductive reasoning over formal logical theories expressed in natural language. A shortcoming of these studies, however, is that they do not take into account that logical theories, when sampled uniformly at random, do not necessarily lead to hard instances. We propose a new methodology for creating challenging algorithmic reasoning datasets that focus on natural language satisfiability (NLSat) problems. The key idea is to draw on insights from the empirical sampling of hard propositional SAT problems and from complexity-theoretic studies of language. This methodology allows us to distinguish easy instances from hard ones, and to systematically increase the complexity of existing reasoning benchmarks such as RuleTaker. We find that, given sufficient training data, current transformers are surprisingly robust at solving the resulting NLSat problems of substantially increased difficulty. They also exhibit some degree of scale-invariance: the ability to generalize to problems of larger size and scope. Our results, however, also reveal important limitations: careful sampling of training data is crucial for building models that generalize to larger problems, and transformer models' limited scale-invariance suggests they are far from learning robust deductive reasoning algorithms.
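The hard-instance intuition can be illustrated with plain propositional 3-SAT, where random formulas are empirically hardest near a clause-to-variable ratio of about 4.26, a well-known result from the SAT phase-transition literature. The generator below produces raw formulas with a brute-force checker; it is not the paper's natural-language encoding or its exact sampling procedure:

```python
import itertools
import random

def random_3sat(n_vars, ratio=4.26, seed=0):
    """Random 3-CNF near the hardness peak; literals are signed ints."""
    rng = random.Random(seed)
    clauses = []
    for _ in range(round(ratio * n_vars)):
        variables = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

def satisfiable(clauses, n_vars):
    """Brute-force check over all 2^n assignments (small n only)."""
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

formula = random_3sat(8)
is_sat = satisfiable(formula, 8)
```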
Fine-tuning continuous prompts for target tasks has recently emerged as a compact alternative to full model fine-tuning. Motivated by these promising results, we investigate the feasibility of extracting a discrete (textual) interpretation of continuous prompts that is faithful to the problem they solve. In practice, we observe a "wayward" behavior between the task solved by continuous prompts and their nearest-neighbor discrete projections: we can find continuous prompts that solve a task while being projected to arbitrary text (e.g., the definition of a different or even contradictory task), while remaining within a very small (2%) margin of the best continuous prompt of the same size for the task. We provide intuitions behind this odd and surprising behavior, as well as extensive empirical analyses quantifying the effect of various parameters. For instance, for larger model sizes we observe higher waywardness, i.e., we can find prompts that map more closely to any arbitrary text with a smaller drop in accuracy. These findings have important implications for the difficulty of faithfully interpreting continuous prompts and for their generalization across models and tasks, providing guidance for future progress in prompting language models.
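The nearest-neighbor discrete projection mentioned above can be sketched directly: map each continuous prompt vector to the vocabulary token whose embedding is closest. Vocabulary, embeddings, and prompt below are toy stand-ins, not any model's actual embedding table:

```python
import numpy as np

def project_prompt(prompt_vecs, vocab_embs, vocab_tokens):
    """Map each continuous prompt vector to its nearest token embedding."""
    tokens = []
    for vec in prompt_vecs:
        dists = np.linalg.norm(vocab_embs - vec, axis=1)
        tokens.append(vocab_tokens[int(np.argmin(dists))])
    return tokens

vocab_tokens = ["good", "bad", "movie"]
vocab_embs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
prompt = np.array([[0.9, 0.1], [0.2, 1.1]])
projected = project_prompt(prompt, vocab_embs, vocab_tokens)
```

Waywardness is the observation that many very different continuous prompts, with very different projections of this kind, can solve the same task nearly as well.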